- Search Results
Search for: All records
Total Resources: 4
- Filter by Author / Creator
- Ji, Yangfeng (4)
- Chen, Hannah (2)
- Evans, David (2)
- Chen, Hanjie (1)
- Chu, Zhendong (1)
- Qi, Yanjun (1)
- Sekhon, Arshdeep (1)
- Shrivastava, Aman (1)
- Sun, Tong (1)
- Wang, Hongning (1)
- Wang, Zhe (1)
- Wang, Zichao (1)
- Zhang, Ruiyi (1)
- Chu, Zhendong; Wang, Zichao; Zhang, Ruiyi; Ji, Yangfeng; Wang, Hongning; Sun, Tong (1st ICML Workshop on In-Context Learning at ICML 2024)
- Sekhon, Arshdeep; Chen, Hanjie; Shrivastava, Aman; Wang, Zhe; Ji, Yangfeng; Qi, Yanjun (Proceedings of the AAAI Conference on Artificial Intelligence)
  Recent NLP literature has seen growing interest in improving model interpretability. Along this direction, we propose a trainable neural network layer that learns a global interaction graph between words and then selects more informative words using the learned word interactions. Our layer, which we call WIGRAPH, can plug into any neural network-based NLP text classifier right after its word embedding layer. Across multiple SOTA NLP models and various NLP datasets, we demonstrate that adding the WIGRAPH layer substantially improves NLP models' interpretability and enhances models' prediction performance at the same time.
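  The abstract describes the mechanism only at a high level. As a rough illustration of the general idea of an interaction layer inserted after the word embeddings (not the authors' WIGRAPH code; the class name, gating scheme, and sizes below are assumptions for illustration), the following PyTorch sketch learns pairwise word-interaction scores and uses them to reweight each word's embedding before it reaches a downstream classifier:

```python
# Rough sketch only (assumed names and gating scheme), not the authors' WIGRAPH
# implementation: learn pairwise word interactions right after the embedding
# layer and reweight each word by how strongly the other words interact with it.
import torch
import torch.nn as nn


class ToyWordInteractionLayer(nn.Module):
    def __init__(self, embed_dim: int):
        super().__init__()
        self.query = nn.Linear(embed_dim, embed_dim)
        self.key = nn.Linear(embed_dim, embed_dim)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, embed_dim), output of a word embedding layer
        q, k = self.query(embeddings), self.key(embeddings)
        # Dense interaction "graph": row-normalized pairwise scores, (batch, L, L)
        interactions = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        # A word's importance = average interaction weight it receives from other words
        importance = interactions.mean(dim=1).unsqueeze(-1)   # (batch, L, 1)
        # Gate the embeddings; rescale by L so the average gate stays near 1
        return embeddings * importance * embeddings.size(1)


# Usage: drop between an embedding layer and any text classifier.
emb = nn.Embedding(1000, 64)
layer = ToyWordInteractionLayer(64)
tokens = torch.randint(0, 1000, (2, 7))   # (batch=2, seq_len=7)
out = layer(emb(tokens))                  # (2, 7, 64), reweighted embeddings
```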
- Finding Friends and Flipping Frenemies: Automatic Paraphrase Dataset Augmentation Using Graph Theory. Chen, Hannah; Ji, Yangfeng; Evans, David (Findings of ACL: Empirical Methods in Natural Language Processing)
  Most NLP datasets are manually labeled, so they suffer from inconsistent labeling or limited size. We propose methods for automatically improving datasets by viewing them as graphs with expected semantic properties. We construct a paraphrase graph from the provided sentence pair labels, and create an augmented dataset by directly inferring labels from the original sentence pairs using a transitivity property. We use structural balance theory to identify likely mislabelings in the graph, and flip their labels. We evaluate our methods on paraphrase models trained using these datasets starting from a pretrained BERT model, and find that the automatically enhanced training sets result in more accurate models.
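  As a rough sketch of the augmentation idea summarized in that abstract (the function name and toy labels are assumptions, not the authors' released code), the snippet below builds a signed paraphrase graph from pair labels, infers missing pairs by transitivity, and flags triangles whose signs violate structural balance as likely mislabelings:

```python
# Rough sketch only, not the authors' released code: signed paraphrase graph,
# transitive label inference, and structural-balance checks for mislabelings.
from itertools import combinations


def augment_and_audit(pairs):
    """pairs: dict {(a, b): +1 or -1}, where +1 = paraphrase and -1 = not."""
    sign = {frozenset(p): s for p, s in pairs.items()}
    nodes = {n for p in pairs for n in p}

    inferred, suspicious = {}, []
    for a, b, c in combinations(sorted(nodes), 3):
        edges = [frozenset(e) for e in ((a, b), (b, c), (a, c))]
        known = [e for e in edges if e in sign]
        if len(known) == 3:
            # Structural balance: a triangle is consistent iff its sign product is +1
            if sign[edges[0]] * sign[edges[1]] * sign[edges[2]] < 0:
                suspicious.append((a, b, c))
        elif len(known) == 2:
            # Transitivity: infer the missing edge so the triangle becomes balanced
            missing = next(e for e in edges if e not in sign)
            inferred[missing] = sign[known[0]] * sign[known[1]]
    return inferred, suspicious


# Example: (s1, s2) and (s2, s3) are paraphrases, so (s1, s3) is inferred as one.
new_pairs, flagged = augment_and_audit({("s1", "s2"): +1, ("s2", "s3"): +1})
print(new_pairs, flagged)   # {frozenset({'s1', 's3'}): 1} []
```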